Continuous long-term monitoring of motor health is crucial for the early detection of abnormalities such as bearing faults (up to 51% of motor failures are attributed to bearing faults). Despite numerous methodologies proposed for bearing fault detection, most of them require both normal (healthy) and abnormal (faulty) data for training. Even with recent deep learning (DL) methodologies trained on labeled data from the same machine, classification accuracy deteriorates significantly when one or a few operating conditions are altered. Furthermore, their performance degrades severely, or they may fail entirely, when tested on another machine with entirely different healthy and faulty signal patterns. To address this, in this pilot study we propose a zero-shot bearing fault detection method that can detect any fault on a new (target) machine regardless of the working conditions, sensor parameters, or fault characteristics. To accomplish this objective, a 1D Operational Generative Adversarial Network (Op-GAN) first characterizes the transition between normal and faulty vibration signals of (a) source machine(s) under various conditions, sensor parameters, and fault types. For a target machine, potential faulty signals can then be generated, and a compact, lightweight 1D Self-ONN fault detector can be trained on the machine's actual healthy signals and the synthesized faulty signals to detect a real faulty condition in real time whenever it occurs. To validate the proposed approach, a new benchmark dataset is created using two different motors working under different conditions and sensor locations. Experimental results demonstrate that this novel approach can accurately detect any bearing fault, achieving average recall rates of around 89% and 95% on the two target machines regardless of the fault's type, severity, and location.
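A minimal sketch of the zero-shot pipeline described above: synthesize faulty vibration segments for the target machine with an already trained generator, then train a compact 1D detector on the real healthy segments plus the synthetic faulty ones. The generator interface and the plain Conv1d layers (standing in for Self-ONN layers) are assumptions for illustration, not the authors' exact architectures.

```python
import torch
import torch.nn as nn

class CompactDetector(nn.Module):
    """Lightweight 1D classifier: healthy (0) vs. faulty (1) vibration segments."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=16, stride=4), nn.Tanh(),
            nn.Conv1d(8, 16, kernel_size=8, stride=4), nn.Tanh(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(16, 2)

    def forward(self, x):                      # x: (batch, 1, segment_length)
        return self.head(self.features(x).squeeze(-1))

def train_zero_shot_detector(generator, healthy_segments, epochs=20):
    """Train on real healthy segments plus faulty segments synthesized by a
    pre-trained generator that maps healthy signals to faulty counterparts
    (a hypothetical interface for the Op-GAN)."""
    with torch.no_grad():
        synthetic_faulty = generator(healthy_segments)
    x = torch.cat([healthy_segments, synthetic_faulty])
    y = torch.cat([torch.zeros(len(healthy_segments), dtype=torch.long),
                   torch.ones(len(synthetic_faulty), dtype=torch.long)])
    model = CompactDetector()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model
```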
The increase in the number of unmanned aerial vehicles (a.k.a. drones) poses several threats to public privacy, critical infrastructure, and cyber security. Hence, detecting unauthorized drones is a significant problem that has received attention in the last few years. In this paper, we present our experimental work on three drone detection methods (acoustic detection, radio frequency (RF) detection, and visual detection) to evaluate their efficacy in both indoor and outdoor environments. Owing to the limitations of these schemes, we present a novel encryption-based drone detection scheme that uses a two-stage verification of the drone's received signal strength indicator (RSSI) and of an encryption key generated from the drone's position coordinates to reliably detect an unauthorized drone in the presence of authorized drones.
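An illustrative sketch of a two-stage check in the spirit described above: first verify that the measured RSSI is plausible for the reported distance, then verify a key derived from the drone's position coordinates. The log-distance path-loss parameters and the HMAC-based key derivation are assumptions made for this example; the paper's exact scheme may differ.

```python
import hashlib, hmac, math

def rssi_consistent(rssi_dbm, reported_distance_m,
                    tx_power_dbm=-40.0, path_loss_exp=2.7, tol_db=6.0):
    """Stage 1: does the measured RSSI roughly match the reported distance?"""
    expected = tx_power_dbm - 10 * path_loss_exp * math.log10(max(reported_distance_m, 1.0))
    return abs(rssi_dbm - expected) <= tol_db

def position_key(shared_secret: bytes, lat: float, lon: float, alt: float) -> bytes:
    """Stage 2: key derived from the drone's position and a pre-shared secret."""
    msg = f"{lat:.5f},{lon:.5f},{alt:.1f}".encode()
    return hmac.new(shared_secret, msg, hashlib.sha256).digest()

def is_authorized(rssi_dbm, reported_pos, reported_distance_m, presented_key, secret):
    lat, lon, alt = reported_pos
    return (rssi_consistent(rssi_dbm, reported_distance_m) and
            hmac.compare_digest(presented_key, position_key(secret, lat, lon, alt)))
```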
Data-driven modeling approaches such as jump tables are promising techniques for modeling populations of resistive random-access memory (ReRAM) or other emerging memory devices for hardware neural network simulations. As these tables rely on data interpolation, this work explores open questions about their fidelity in relation to the stochastic device behavior they model. We study how various jump-table device models impact the attained network performance estimates, a concept we define as modeling bias. Two methods of jump-table device modeling, binning and Optuna-optimized binning, are explored using synthetic data with known distributions for benchmarking purposes, as well as experimental data obtained from TiOx ReRAM devices. Results on a multi-layer perceptron trained on MNIST show that device models based on binning can behave unpredictably, particularly when the device dataset contains few points, sometimes over-promising and sometimes under-promising the target network accuracy. This paper also proposes device-level metrics that exhibit trends similar to the network-level modeling bias metric. The proposed approach opens the possibility for future investigations into statistical device models with better performance, as well as experimentally verified modeling bias in different in-memory computing and neural network architectures.
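A minimal sketch of binning-based jump-table modeling as described above: bin measured device transitions by their starting conductance, store the empirical distribution of the conductance change in each bin, and sample from it during simulation. The bin count and the inverse-CDF sampling are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def build_jump_table(g_before, g_after, n_bins=32):
    """g_before/g_after: conductance arrays measured before/after one programming pulse."""
    edges = np.linspace(g_before.min(), g_before.max(), n_bins + 1)
    table = []
    for i in range(n_bins):
        mask = (g_before >= edges[i]) & (g_before < edges[i + 1])
        delta = np.sort(g_after[mask] - g_before[mask])
        table.append(delta if delta.size else np.zeros(1))
    return edges, table

def sample_update(g, edges, table, rng):
    """Draw one stochastic conductance update for a device currently at conductance g."""
    i = int(np.clip(np.searchsorted(edges, g) - 1, 0, len(table) - 1))
    deltas = table[i]
    u = rng.random()  # inverse-CDF sampling with linear interpolation between sorted samples
    return g + np.interp(u, np.linspace(0, 1, deltas.size), deltas)
```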
Federated Learning (FL) is a machine learning paradigm that enables the training of a shared global model across distributed clients while keeping the training data local. While most prior work on designing systems for FL has focused on stateful, always-running components, recent work has shown that components in an FL system can greatly benefit from serverless computing and Function-as-a-Service technologies. To this end, distributed training of models with serverless FL systems can be more resource-efficient and cheaper than with conventional FL systems. However, serverless FL systems still suffer from stragglers, i.e., clients that are slow due to their resource and statistical heterogeneity. While several strategies have been proposed for mitigating stragglers in FL, most methodologies do not account for the particular characteristics of serverless environments, i.e., cold starts, performance variations, and the ephemeral stateless nature of function instances. Towards this, we propose FedLesScan, a novel clustering-based semi-asynchronous training strategy specifically tailored for serverless FL. FedLesScan dynamically adapts to the behaviour of clients and minimizes the effect of stragglers on the overall system. We implement our strategy by extending an open-source serverless FL system called FedLess. Moreover, we comprehensively evaluate our strategy using second-generation Google Cloud Functions with four datasets and varying percentages of stragglers. Results from our experiments show that, compared to other approaches, FedLesScan reduces training time and cost by an average of 8% and 20%, respectively, while utilizing clients better, with an average increase in the effective update ratio of 17.75%.
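A rough sketch of one clustering-based client-selection step in the spirit of the strategy above: group clients by their observed invocation behaviour and draw the next training cohort cluster by cluster so slow clients are neither ignored nor allowed to block a round. The two behaviour features, k-means with three clusters, and the cluster-rotation rule are all assumptions for illustration, not FedLesScan's actual algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_cohort(client_stats, cohort_size, round_idx):
    """client_stats: {client_id: (mean_invocation_duration_s, failure_rate)}"""
    ids = list(client_stats)
    feats = np.array([client_stats[c] for c in ids])
    labels = KMeans(n_clusters=min(3, len(ids)), n_init=10).fit_predict(feats)
    # rotate which cluster is preferred each round so no behaviour group is starved
    order = np.roll(np.unique(labels), round_idx)
    cohort = []
    for lab in order:
        cohort += [c for c, l in zip(ids, labels) if l == lab]
    return cohort[:cohort_size]
```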
In this paper, we address the stochastic contextual linear bandit problem, where a decision maker is provided a context (a random set of actions drawn from a distribution). The expected reward of each action is given by the inner product of the action and an unknown parameter. The goal is to design an algorithm that learns to play as closely as possible to the unknown optimal policy after a number of action plays. This problem is considered more challenging than the linear bandit problem, which can be viewed as a contextual bandit problem with a \emph{fixed} context. Surprisingly, in this paper we show that the stochastic contextual problem can be solved as if it were a linear bandit problem. In particular, we establish a novel reduction framework that converts every stochastic contextual linear bandit instance to a linear bandit instance when the context distribution is known. When the context distribution is unknown, we establish an algorithm that reduces the stochastic contextual instance to a sequence of linear bandit instances with small misspecifications and achieves nearly the same worst-case regret bound as the algorithm that solves the misspecified linear bandit instances. As a consequence, our results imply an $O(d\sqrt{T\log T})$ high-probability regret bound for contextual linear bandits, making progress toward resolving an open problem posed in (Li et al., 2019) and (Li et al., 2021). Our reduction framework opens up a new way to approach stochastic contextual linear bandit problems and enables improved regret bounds in a number of instances, including the batch setting, contextual bandits with misspecifications, contextual bandits with sparse unknown parameters, and contextual bandits with adversarial corruption.
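A tiny simulation of the problem setting above (not of the paper's reduction): at each round a random action set, i.e., the context, is drawn from a distribution, and the reward of the chosen action is its inner product with an unknown parameter plus noise. The uniform-random policy and the Gaussian context distribution are placeholders for illustration.

```python
import numpy as np

def simulate_round(rng, theta_star, n_actions=10, d=5, noise_std=0.1):
    context = rng.normal(size=(n_actions, d))        # random set of actions (the context)
    chosen = rng.integers(n_actions)                  # a placeholder policy's choice
    reward = context[chosen] @ theta_star + rng.normal(scale=noise_std)
    expected = context @ theta_star                   # expected rewards of all actions
    return reward, expected.max() - expected[chosen]  # observed reward, instantaneous regret

rng = np.random.default_rng(0)
theta_star = rng.normal(size=5)
print(simulate_round(rng, theta_star))
```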
Historically, content has been the primary lens used to study the language of online communities. This paper instead focuses on the linguistic style of communities. While we know that individuals have distinguishable styles, here we ask whether communities have distinguishable styles as well. Moreover, whereas prior work has relied on a narrow definition of style, we adopt a broad definition involving 262 features to analyze the linguistic style of 9 online communities from 3 social media platforms discussing politics, television, and travel. We find that communities indeed have distinct styles. Furthermore, style is an excellent predictor of group membership (F-score 0.952 and accuracy 96.09%). While on average it is statistically equivalent to predictions using content alone, it is more resilient to reductions in training data.
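An illustrative sketch of style-based group-membership prediction as studied above: represent each post by content-independent style features and train an off-the-shelf classifier to predict which community it came from. The paper uses 262 features; the three shown here are simple placeholders, and logistic regression is an assumed choice of classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def style_features(post: str) -> list:
    words = post.split()
    return [
        np.mean([len(w) for w in words]) if words else 0.0,   # mean word length
        post.count(",") / max(len(words), 1),                  # comma rate
        sum(w.isupper() for w in words) / max(len(words), 1),  # all-caps word rate
    ]

def community_style_scores(posts, community_labels):
    """Cross-validated F-score of predicting the community from style features alone."""
    X = np.array([style_features(p) for p in posts])
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, community_labels, cv=5, scoring="f1_macro")
```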
It is recognized through numerous studies in psychology, neuroscience, and sensorial linguistics that sensory perception and language are interconnected. Against this rich background, we ask whether the use of sensorial language in writing is part of linguistic style. This question is important from the perspective of stylometric research, where a rich set of language features has been explored but insufficient attention has been paid to features related to sensorial language. With this goal, we explore several angles on sensorial language and style in collections of lyrics, novels, and poetry. We find, for example, that an individual's use of sensorial language is not a random phenomenon; choice is likely involved. Likewise, sensorial style is generally stable over time, with only very small shifts. Moreover, style can be extracted from just a few hundred sentences containing sensorial terms. We also identify representative and distinctive features within each genre. For example, we observe that 4 of the top 6 representative features of the novel collection involve individuals using olfactory language where we would expect non-olfactory language.
In this paper, we propose differentially private algorithms for the stochastic linear bandit problem in the central, local, and shuffled models. In the central model, we achieve almost the same regret as the optimal non-private algorithms, which means we get privacy for free. In particular, we achieve a regret of $\tilde{O}(\sqrt{T}+\frac{1}{\epsilon})$, matching the known lower bound for private linear bandits, while the best previously known algorithm achieves $\tilde{O}(\frac{1}{\epsilon}\sqrt{T})$. In the local case, we achieve a regret of $\tilde{O}(\frac{1}{\epsilon}\sqrt{T})$, which matches the non-private regret for constant $\epsilon$, but suffers a regret penalty when $\epsilon$ is small. In the shuffled model, we also achieve a regret of $\tilde{O}(\sqrt{T}+\frac{1}{\epsilon})$ as in the central case, while the best previously known algorithm suffers a regret of $\tilde{O}(\frac{1}{\epsilon}T^{3/5})$. Our numerical evaluation validates our theoretical results.
We address the problem of selective classification, where the goal is to achieve the best performance at a desired coverage of the dataset. Recent state-of-the-art selective methods introduce architectural changes, such as a separate selection head or an extra abstention logit. In this paper, we present surprising results for selective classification by confirming that the superior performance of state-of-the-art methods is owed to training a more generalizable classifier; their selection mechanism, however, is suboptimal. We argue that the selection mechanism should be rooted in the objective function rather than in a separately computed score. Accordingly, in this paper we motivate an alternative selection strategy based on the cross-entropy loss used for the classification setting, namely the maximum of the logits. Our proposed selection strategy achieves better results by a significant margin, consistently across all coverages and all datasets, without any additional computation. Finally, inspired by our superior selection mechanism, we propose to further regularize the objective function with entropy minimization. Our proposed max-logit selection with the modified loss function achieves new state-of-the-art results for selective classification.
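A minimal sketch of max-logit selection as described above: use the largest logit as the confidence score and calibrate a threshold so that the desired fraction (coverage) of samples is accepted. The quantile-based thresholding is an assumed calibration step, and the entropy-regularized loss variant mentioned above is not shown.

```python
import numpy as np

def max_logit_select(logits, labels, coverage=0.8):
    """logits: (N, C) array; returns accuracy on the accepted subset and the realized coverage."""
    scores = logits.max(axis=1)                      # selection score = maximum logit
    threshold = np.quantile(scores, 1.0 - coverage)  # accept the most confident `coverage` fraction
    accept = scores >= threshold
    preds = logits.argmax(axis=1)
    selective_acc = (preds[accept] == labels[accept]).mean()
    return selective_acc, accept.mean()
```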
Contextual linear bandits is a rich and theoretically important model with many practical applications. Recently, this setting has attracted much interest in wireless applications, where communication constraints can be a performance bottleneck, especially when the contexts come from a large $d$-dimensional space. In this paper, we consider a distributed memoryless contextual linear bandit learning problem, in which the agents who observe the contexts and take actions are geographically separated from the learner, who performs the learning without seeing the contexts. We assume that the contexts are generated from a distribution and propose a method that uses approximately $5d$ bits per context when the context distribution is unknown and $0$ bits per context when the context distribution is known, while achieving nearly the same regret as if the contexts were directly observable. The former bound improves upon existing bounds by a $\log(T)$ factor, where $T$ is the length of the horizon, while the latter achieves information-theoretic tightness.
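A purely illustrative quantization sketch: one naive way to spend roughly $5d$ bits per context is 5 bits per coordinate after clipping to a fixed range. This is an assumption made only to give a feel for the communication budget; it is not necessarily the scheme used in the paper.

```python
import numpy as np

def quantize_context(context, bits_per_coord=5, clip=3.0):
    """context: (n_actions, d) array; returns integer codes and the decoded approximation."""
    levels = 2 ** bits_per_coord
    clipped = np.clip(context, -clip, clip)
    codes = np.round((clipped + clip) / (2 * clip) * (levels - 1)).astype(np.uint8)
    decoded = codes / (levels - 1) * (2 * clip) - clip
    return codes, decoded
```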